
[documentation] Add a tutorial for LBFGS #404

Open · frapac wants to merge 1 commit into master
Conversation

@frapac (Member) commented Jan 27, 2025

  • add a small tutorial on using the LBFGS algorithm in MadNLP
  • fix a bug in LBFGS: the options were not passed correctly to the quasi-Newton solver

cc @amontoison @blegat

Solves #400
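For readers landing here from the issue, a minimal sketch of what the tutorial covers. The test problem below is a hypothetical stand-in, and the option names (`hessian_approximation`, `quasi_newton_options`, `max_history`) are assumed from MadNLP's quasi-Newton interface as described in this PR; `quasi_newton_options` is precisely the option the bug fix concerns:

```julia
using ADNLPModels, MadNLP

# A small unconstrained test problem (hypothetical stand-in for the
# tutorial's actual example): an extended Rosenbrock function.
nlp = ADNLPModel(
    x -> sum(100 * (x[i+1] - x[i]^2)^2 + (1 - x[i])^2 for i in 1:length(x)-1),
    zeros(10),
)

# Enable the compact LBFGS Hessian approximation. After this fix, the
# quasi-Newton options are forwarded correctly to the quasi-Newton solver.
results = madnlp(
    nlp;
    hessian_approximation=MadNLP.CompactLBFGS,
    quasi_newton_options=MadNLP.QuasiNewtonOptions(; max_history=10),
)
```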

@frapac frapac requested review from sshin23 and amontoison January 27, 2025 21:18
@amontoison (Member) left a comment

LGTM. Maybe you could add a comment about LBFGSB for completeness?

Hence, the problem is a good candidate for a quasi-Newton algorithm.

We start by solving the problem with the default options in MadNLP,
using the dense linear solver Lapack:

Suggested change:
- using the dense linear solver Lapack:
+ using a dense linear solver from LAPACK:
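As a point of reference for this comment, the solve being described would look something like the sketch below (reusing the hypothetical `nlp` model from the earlier sketch; `LapackCPUSolver` is the dense LAPACK backend shipped with MadNLP):

```julia
using MadNLP

# Reference solve with the exact Hessian and a dense LAPACK linear solver,
# before switching on the LBFGS approximation.
results_ref = madnlp(nlp; linear_solver=LapackCPUSolver)
```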

@amontoison amontoison changed the title [documentation] Add tutorial for LBFGS [documentation] Add a tutorial for LBFGS Jan 27, 2025
@amontoison (Member) commented:
with ``\xi > 0`` a scaling factor, ``U_k`` and ``V_k`` two ``n \times 2p`` matrices.
The number ``p`` denotes the number of vectors used when computing the limited memory updates
(the parameter ``max_history`` in MadNLP): the larger, the more accurate is the low-rank approximation.

Suggested change:
- (the parameter ``max_history`` in MadNLP): the larger, the more accurate is the low-rank approximation.
+ (the parameter `max_history` in MadNLP): the larger, the more accurate is the low-rank approximation.
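To make the low-rank structure concrete, a small numerical sketch; `U` and `V` are random placeholders for the ``U_k`` and ``V_k`` that MadNLP builds from the stored curvature pairs:

```julia
using LinearAlgebra

n, p = 100, 5      # problem size and LBFGS memory (`max_history`)
ξ = 1.0            # scaling factor
U = randn(n, 2p)   # placeholder for U_k
V = randn(n, 2p)   # placeholder for V_k

# Compact limited-memory approximation: scaled identity plus a rank-2p term.
B = ξ * I + U * V'

# The correction away from ξI has rank at most 2p, which is what keeps
# the linear algebra cheap for small p.
@assert rank(B - ξ * I) <= 2p
```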

[Reviewer screenshot: 2025-01-27 20-05-41]


!!! info
As MadNLP is designed to solve constrained optimization problems,
it does not approximate the inverse of the Hessian matrix, as it is done

Suggested change:
- it does not approximate the inverse of the Hessian matrix, as it is done
+ it does not approximate the inverse of the Hessian matrix, as done
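For context, a standard interior-point argument (not MadNLP-specific): the Hessian approximation ``B_k`` enters the KKT system directly, so an approximation of its inverse would be of no use there:

```math
\begin{bmatrix} B_k + \Sigma_k & A_k^\top \\ A_k & 0 \end{bmatrix}
\begin{bmatrix} \Delta x \\ \Delta \lambda \end{bmatrix}
= - \begin{bmatrix} r_{\mathrm{dual}} \\ r_{\mathrm{primal}} \end{bmatrix}
```

Here ``A_k`` is the constraint Jacobian and ``\Sigma_k`` the diagonal barrier term. In the unconstrained setting, by contrast, the step ``-H_k \nabla f(x_k)`` uses an inverse approximation ``H_k \approx B_k^{-1}`` directly.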
